Finding a low-rank basis in a matrix subspace
For a given matrix subspace, how can we find a basis that consists of
low-rank matrices? This is a generalization of the sparse vector problem. It
turns out that when the subspace is spanned by rank-1 matrices, the matrices
can be obtained by the tensor CP decomposition. For the higher rank case, the
situation is not as straightforward. In this work we present an algorithm based
on a greedy process applicable to higher rank problems. Our algorithm first
estimates the minimum rank by applying soft singular value thresholding to a
nuclear norm relaxation, and then computes a matrix with that rank using the
method of alternating projections. We provide local convergence results, and
compare our algorithm with several alternative approaches. Applications include
data compression beyond the classical truncated SVD, computing accurate
eigenvectors of a near-multiple eigenvalue, image separation, and graph
Laplacian eigenproblems.
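The thresholding step described above can be sketched as follows. This is the generic soft singular value thresholding operator (the proximal operator of the nuclear norm), not the authors' exact implementation; the function name and test matrix are our own:

```python
import numpy as np

def svt(X, tau):
    """Soft singular value thresholding: shrink every singular value
    of X by tau, clipping at zero (proximal operator of tau * ||.||_*)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s_shrunk = np.maximum(s - tau, 0.0)
    return U @ np.diag(s_shrunk) @ Vt

# Thresholding a noisy rank-1 matrix suppresses the small noise-driven
# singular values, revealing the underlying minimum rank.
rng = np.random.default_rng(0)
L = np.outer(rng.standard_normal(20), rng.standard_normal(20))  # rank 1
X = L + 0.01 * rng.standard_normal((20, 20))                    # noisy copy
Y = svt(X, tau=0.5)
print(np.linalg.matrix_rank(Y))  # → 1
```

The rank estimate read off from `Y` is what the alternating-projections phase would then take as its target rank.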
On orthogonal tensors and best rank-one approximation ratio
As is well known, the smallest possible ratio between the spectral norm and
the Frobenius norm of an m x n matrix with m <= n is 1/sqrt(m) and
is (up to scalar scaling) attained only by matrices having pairwise orthonormal
rows. In the present paper, the smallest possible ratio between spectral and
Frobenius norms of n_1 x ... x n_d tensors of order d, also
called the best rank-one approximation ratio in the literature, is
investigated. The exact value is not known for most configurations of n_1 <= ... <= n_d. Using a natural definition of orthogonal tensors over the real
field (resp., unitary tensors over the complex field), it is shown that the
obvious lower bound (n_1 n_2 ... n_{d-1})^{-1/2} is attained if and only if a
tensor is orthogonal (resp., unitary) up to scaling. Whether or not orthogonal
or unitary tensors exist depends on the dimensions n_1, ..., n_d and the
field. A connection between the (non)existence of real orthogonal tensors of
order three and the classical Hurwitz problem on composition algebras can be
established: existence of orthogonal tensors of size l x m x n
is equivalent to the admissibility of the triple [l, m, n] to the Hurwitz
problem. Some implications for higher-order tensors are then given. For
instance, real orthogonal n x n x ... x n tensors of order d >= 3
do exist, but only when n in {1, 2, 4, 8}. In the complex case, the situation is
more drastic: unitary tensors of size l x m x n with l <= m <= n exist only when lm <= n. Finally, some numerical illustrations
for spectral norm computation are presented.
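The matrix case of this ratio is easy to check numerically. The construction below (orthonormal rows obtained via QR) is our own illustration, not taken from the paper:

```python
import numpy as np

# For an m x n matrix (m <= n), the ratio ||A||_2 / ||A||_F is at least
# 1/sqrt(m), with equality exactly for (scalar multiples of) matrices
# with pairwise orthonormal rows. Quick check with such a matrix:
m, n = 3, 5
Q, _ = np.linalg.qr(np.random.default_rng(1).standard_normal((n, m)))
A = Q.T                       # 3 x 5 matrix with orthonormal rows
ratio = np.linalg.norm(A, 2) / np.linalg.norm(A, 'fro')
print(ratio, 1 / np.sqrt(m))  # both ≈ 0.5774, i.e. 1/sqrt(3)
```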
A New Approximation Guarantee for Monotone Submodular Function Maximization via Discrete Convexity
In monotone submodular function maximization, approximation guarantees based on the curvature of the objective function have been extensively studied in the literature. However, the notion of curvature is often pessimistic, and we rarely obtain improved approximation guarantees, even for very simple objective functions.
In this paper, we provide a novel approximation guarantee by extracting an M^{natural}-concave function h:2^E -> R_+, a notion in discrete convex analysis, from the objective function f:2^E -> R_+. We introduce a novel notion called the M^{natural}-concave curvature of a given set function f, which measures how much f deviates from an M^{natural}-concave function, and show that we can obtain a (1-gamma/e-epsilon)-approximation to the problem of maximizing f under a cardinality constraint in polynomial time, where gamma is the value of the M^{natural}-concave curvature and epsilon > 0 is an arbitrary constant. Then, we show that we can obtain nontrivial approximation guarantees for various problems by applying the proposed algorithm.
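The paper's algorithm itself is not reproduced here. For context, the classical greedy method for monotone submodular maximization under a cardinality constraint, whose (1 - 1/e) guarantee curvature-based bounds refine, can be sketched on a coverage function (a standard monotone submodular example; the instance is our own toy):

```python
def greedy_max_coverage(sets, k):
    """Classical greedy for monotone submodular maximization under a
    cardinality constraint: repeatedly add the set with the largest
    marginal coverage gain. Achieves a (1 - 1/e) approximation."""
    chosen, covered = [], set()
    for _ in range(k):
        best = max(range(len(sets)), key=lambda i: len(sets[i] - covered))
        chosen.append(best)
        covered |= sets[best]
    return chosen, covered

sets = [{1, 2, 3}, {3, 4}, {4, 5, 6}, {1, 6}]
chosen, covered = greedy_max_coverage(sets, k=2)
print(sorted(covered))  # → [1, 2, 3, 4, 5, 6]
```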
Optimal algorithms for group distributionally robust optimization and beyond
Distributionally robust optimization (DRO) can improve the robustness and
fairness of learning methods. In this paper, we devise stochastic algorithms
for a class of DRO problems including group DRO, subpopulation fairness, and
empirical conditional value at risk (CVaR) optimization. Our new algorithms
achieve faster convergence rates than existing algorithms for multiple DRO
settings. We also provide a new information-theoretic lower bound that implies
our bounds are tight for group DRO. Empirically, too, our algorithms outperform
known methods.
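A minimal sketch of what a group DRO objective looks like, on a toy instance of our own construction; the generic primal-dual update below (SGD on the model against multiplicative weights on the group distribution) is standard background, not the paper's optimal algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy group DRO: three groups share a linear model but differ in noise;
# we minimize the worst group's mean squared loss, min_theta max_g L_g.
theta_true = np.array([1.0, -2.0, 0.5])
groups = []
for noise in (0.1, 0.5, 1.0):
    X = rng.standard_normal((50, 3))
    groups.append((X, X @ theta_true + noise * rng.standard_normal(50)))

def group_loss(theta, X, y):
    r = X @ theta - y
    return 0.5 * np.mean(r ** 2)

theta, q = np.zeros(3), np.ones(3) / 3
worst_before = max(group_loss(theta, X, y) for X, y in groups)
for _ in range(500):
    losses = np.array([group_loss(theta, X, y) for X, y in groups])
    q = q * np.exp(0.5 * losses)       # dual ascent on the simplex
    q /= q.sum()
    g = rng.choice(3, p=q)             # sample a group from q
    X, y = groups[g]
    theta -= 0.05 * X.T @ (X @ theta - y) / len(y)
worst_after = max(group_loss(theta, X, y) for X, y in groups)
print(worst_before, worst_after)       # worst-group loss decreases
```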
Algebraic combinatorial optimization on the degree of determinants of noncommutative symbolic matrices
We address the computation of the degrees of minors of a noncommutative
symbolic matrix of the form A[c] = A_1 t^{c_1} x_1 + ... + A_m t^{c_m} x_m, where A_1, ..., A_m are matrices over a
field K, x_1, ..., x_m are noncommutative variables, c_1, ..., c_m are integer
weights, and t is a commuting variable specifying the degree. This problem
extends the noncommutative Edmonds' problem (Ivanyos et al. 2017), and can
formulate various combinatorial optimization problems. Extending the studies by
Hirai (2018) and Hirai and Ikeda (2022), we provide novel duality theorems and a
polyhedral characterization for the maximum degrees of minors of A[c] of all
sizes, and develop a strongly polynomial-time algorithm for computing them.
This algorithm is viewed as a unified algebraization of the classical Hungarian
method for bipartite matching and the weight-splitting algorithm for linear
matroid intersection. As applications, we provide polynomial-time algorithms
for weighted fractional linear matroid matching and linear optimization over
rank-2 Brascamp-Lieb polytopes.
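The paper works with noncommutative variables; in the simplest commutative special case with generic coefficients, the maximum degree of det A(t), with entries a_ij t^{c_ij}, recovers the maximum weight of a perfect matching with weights c_ij, i.e. the assignment problem solved by the classical Hungarian method. A brute-force check on a 3 x 3 toy instance of our own:

```python
import itertools
from collections import defaultdict
from math import prod

def sign(perm):
    """Permutation sign via inversion count."""
    inv = sum(1 for i in range(len(perm)) for j in range(i + 1, len(perm))
              if perm[i] > perm[j])
    return -1 if inv % 2 else 1

c = [[3, 1, 0], [2, 4, 1], [0, 2, 5]]       # integer weights c_ij
a = [[2, 3, 5], [7, 11, 13], [17, 19, 23]]  # distinct primes: no cancellation

coeffs = defaultdict(int)                   # det A(t), collected by degree
for p in itertools.permutations(range(3)):
    deg = sum(c[i][p[i]] for i in range(3))
    coeffs[deg] += sign(p) * prod(a[i][p[i]] for i in range(3))

deg_det = max(d for d, v in coeffs.items() if v != 0)
best_matching = max(sum(c[i][p[i]] for i in range(3))
                    for p in itertools.permutations(range(3)))
print(deg_det, best_matching)  # → 12 12
```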
Submodular and Sparse Optimization Methods for Machine Learning and Communication
Type of degree: course-based doctorate. Examination committee: Professor Satoru Iwata (chief examiner), Professor Kunihiko Sadakane, Professor Hirosuke Yamamoto, Associate Professor Akiko Takeda, and Associate Professor Hiroshi Hirai, The University of Tokyo.